Specification vs Likelihood revisited
Several people have been kind enough to comment on my essay on this subject on talk.reason (http://www.talkreason.org/articles/likely.cfm).
I would like to respond to two or three comments here.
1. First, a correction to a simple error of fact. Thanks to Rob Igo for pointing this out. On the first page I wrote that the probability of a Royal Flush given a random deal is 1 in 2.5 million. This is actually the probability of a Royal Flush in a given suit. The probability of a Royal Flush in any suit is four times higher – about 1 in 650,000. I have corrected the essay on talk.reason. It makes no difference to the logic of the argument or the rest of the essay.
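As a quick check of the corrected figures, here is a minimal sketch (my own verification, assuming nothing beyond a standard 52-card deck and 5-card hands):

```python
from math import comb

hands = comb(52, 5)        # total 5-card hands: 2,598,960
p_suited = 1 / hands       # Royal Flush in one named suit
p_any_suit = 4 / hands     # Royal Flush in any of the four suits

print(hands)               # 2598960 -- roughly 1 in 2.6 million per suit
print(round(hands / 4))    # 649740 -- roughly 1 in 650,000 for any suit
```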
2. There have been some comments to the effect that a single Royal Flush is not a good example because its probability is not nearly low enough. When the ID community talk about low probabilities they mean probabilities of the order of 1 in 10^100 or even lower (i.e. roughly 25 Royal Flushes in the same suit in a row).
I cannot see the relevance of this and can only think I was not writing clearly enough. As I say on the first page, I would dismiss the possibility that this was a random deal if it were a single Royal Flush. I would be even more prepared to dismiss a random deal if it were 25 Royal Flushes in a row. The essay does not challenge this conclusion. It seeks to understand why we dismiss a random deal under these circumstances, given that the observed outcome is no more improbable than any other sequence of hands. Dembski tries to account for it via specification, which turns out to be a complicated and difficult thing to define, and for which he offers no justification. I am pointing out that comparison of likelihoods is an established alternative with a clear justification.
Finally some comments from Salvador (I am afraid I don't know his surname):
Mark,
See my post where I state that information poor stochastic processes by definition are incompatible with highly specified events (information rich).
http://www.uncommondescent.com/index.php/archives/1285#comment-47298
I encourage you to ponder why this should be evidently true.
Salvador
Comment by scordova — July 10, 2006 @ 1:30 am
I am afraid I don’t find this evidently true and the thread that Salvador points to throws no light on this. I think it amounts to saying that stochastic processes which are not based on recognisable patterns are unlikely to produce outcomes with recognisable patterns. This may well be true but such processes are unlikely to produce any given outcome - whether the outcome corresponds to a recognisable pattern or not. This seems to be little more than a restatement of the problem the essay is trying to solve.
Salvador also wrote (starting by quoting something I wrote)
“However improbable that outcome, any other set of three or fifty hands is equally improbable”
True, but write down a specification of an exact hand. It should have the probability of a
Royal flush in spades = 1 / [ 52! / (5! (52-5)!) ]
number of bits is log2[ 52! / (5! (52-5)!) ]
Have someone completely shuffle the cards, and then deal them to you. What do you think the chances are your specification will be hit by a random shuffle?
ANY specification works as long as it is detachable and improbable.
Specification is important. How could one possibly not use specification in a copyright infringement suit (which is a valid instance of the EF)?
Again I find this comment confusing. I am not denying that the chances of a Royal Flush in spades or any other given hand is very low.
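Indeed, Salvador's figures are easy to verify. A small sketch (my own check, not part of either argument):

```python
from math import comb, log2

hands = comb(52, 5)          # 2,598,960 possible 5-card hands
p_spades = 1 / hands         # Royal Flush in spades: one exact hand
bits_spades = log2(hands)    # about 21.3 bits of "surprisal"
bits_any = log2(hands / 4)   # any-suit Royal Flush: about 19.3 bits

print(round(bits_spades, 1), round(bits_any, 1))
```

Nobody disputes these numbers; the question the essay addresses is why a low probability plus a pattern licenses rejecting chance.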
He continues:
Finally, your critique obfuscated Dembski’s work rather than clarifying it. Your treatment of a no-aces hand was totally off-base.
What Dembski said:
NFL page 78:
Finally, in placing targets on a wall, it makes no sense to place targets within targets if hitting any target yields the same prize. If one target is included within another target and if all that is at issue is whether some target was hit, then all that matters is the biggest target. The small ones that sit inside bigger ones are, in that case, merely along for the ride.
For example, the spade royal flush is part of the class of royal flush. If our target of interest is any royal flush then hitting a spades royal flush is a sufficient but not necessary condition for getting a royal flush. Thus one does not have to explicitly calculate the odds of a spades royal flush, but merely the simpler class of royal flush. If a spades royal flush appears, one still has at least the surprisal value of a royal flush, and if one's detection threshold is royal flush, then a spades royal flush (if that's more special to you) is merely icing on the cake.
The odds of a royal flush target being hit are:
Royal flush = 4 / [ 52! / (5! (52-5)!) ]
number of bits is log2[ [ 52! / (5! (52-5)!) ] / 4 ]
When you started talking about no-ace hands, your description of Dembski's ideas became Flawed Utterly Beyond All Recognition (FUBAR). What reason is there to consider no-ace hands? The probability of that is huge compared to royal flushes. You were deliberately choosing specifications in that case which would not be very helpful in eliminating chance explanations, and thus your treatment of the matter was flawed.
Again I think I must be guilty of writing unclearly. I use the example of “hands with no aces” simply to illustrate that the definition of specification changes from page 18 to page 19 of Dembski’s paper. The definition on page 18 would include such a hand as part of the specificational resources of a Royal Flush. The definition on page 19 would not. I accept the definition on page 19 as being the one that Dembski wishes to use – so it is not an issue.
I hope this clarifies some issues.
7 Comments:
I still can't make sense of your objection.
If you bump up the number of royal flushes in a row to 25 so that the chance of it approaches the universal probability bound of 1/10^150 you hardly need a meaningful specification. Pick any random sequence of 150 cards and the universe simply isn't old enough or big enough for that particular sequence to stand any reasonable chance of appearing.
The example starts to fall apart after that because the probabilistic resources for poker hands in the real world are insignificant. We can at least make some ballpark calculations as to how many bacteria have reproduced during the life of the planet and how often mutations occur. To put it another way, if every cell that ever lived on the planet earth were dealt 25 hands of cards, the chance that any of them would get any particular sequence of 25 hands you care to independently name is virtually nil.
This is why you can't willy-nilly relate a specification for a 1/10^5 chance with a 1/10^150 chance. Specification doesn't exist in a vacuum. Complexity and probabilistic resources are just as important. You're making specification far too complex and important. Specification is any independently given pattern. Independently name *any* string of 150 cards and you have a specification. Or name just one card or a million. It is still a specification. Absent complexity and probabilistic resources the specification is meaningless.
Dave - thanks for your comment.
I must admit I am running out of ways to try and explain this. I was using a Royal Flush in spades as an example of something we would reject as too improbable to be the result of a random deal. It was an example that Dembski also uses. I was not attempting to relate it to complexity, probability bounds, evolution or biology.
Perhaps it will help if I rewrite the example?
"Suppose the computer started by dealing me 25 consecutive Royal Flushes in spades. I would have no hesitation in rejecting the programme as not providing a random deal. Yet if the deal was random then the probability of getting that set of 25 hands is the same as the probability of any other 25 hands. What is it that makes us reject the sequence of 25 Royal Flushes in spades? Dembski would say it is because it is specified. Specification proves to be a slippery concept that Dembski has defined in several slightly different ways (the latest one requires 41 pages). I would argue that it is simply because there are other explanations with a far, far greater likelihood."
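To put a number on that likelihood comparison, here is a back-of-envelope sketch (my own illustration, assuming a fair 52-card deal for each hand):

```python
from math import comb, log10

hands = comb(52, 5)            # 2,598,960 possible 5-card hands
# Under a fair random deal, ANY named sequence of 25 exact hands --
# 25 spade Royal Flushes included -- has the same tiny probability.
log10_p = -25 * log10(hands)   # about -160, i.e. p is about 1e-160

print(round(log10_p, 1))
```

Under the alternative explanation – a programme that does not deal at random – the same outcome has a likelihood close to 1, which is why the comparison is so one-sided.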
Dave, do you understand Mark's point yet?
Your claim that "specification is any independently given pattern" reveals that you haven't read the paper that Mark is critiquing. Have you read No Free Lunch yet? Which of Dembski's works have you read?
Secondclass
While I have your attention (and knowing how bright you are :-)) – do you understand the argument for a Universal Probability Bound? I read Seth Lloyd's paper (which Dembski references) and he does indeed estimate that the observable universe has performed 10^120 logical operations since the big bang. But what significance does that figure have for the probability of an outcome? Lloyd also estimates the universe can store 10^90 bits. So, imagine the big bang had just happened so there had only been time for 100 logical operations on those 10^90 bits – would the UPB at that stage be 1 in 100?
I am not mocking the concept of the UPB. I am genuinely confused.
LOL. Thanks for the compliment, but my intellect is mediocre at best. I can't perform DaveScot's incredible mental feats, like memorizing the World Book Encyclopedia in the 5th grade.
As far as the UPB, my understanding is that it puts an upper bound on the number of events that have occurred in the history of the universe. So, yes, this bound has presumably grown since the Big Bang.
The UPB solves the problems of arbitrary significance levels and probabilistic resources. Someone who performs a lot of tests will occasionally come across a p-value less than .05 by pure chance, resulting in a Type I error. But we would have to witness on the order of 10^150 chance events in order to see one with a p-value of less than 10^-150. Since even the universe itself hasn't witnessed that many events, setting our significance level to 10^-150 theoretically eliminates Type I errors.
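Secondclass's arithmetic can be sketched as follows (my illustration; the 10^150 figure is Dembski's UPB, not my own estimate):

```python
# Expected number of Type I errors is roughly (tests run) x (alpha).
alpha = 0.05
n_tests = 1000
expected_false_positives = n_tests * alpha   # about 50 by chance alone
print(expected_false_positives)

# At alpha = 10^-150 with roughly 10^150 chance events in the
# universe's history, the expected count is only about one -- and any
# stricter threshold pushes it below one.
expected_at_upb = (10.0 ** 150) * (10.0 ** -150)
print(expected_at_upb)
```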
At least that's my understanding.
Secondclass, thanks. Your explanation is clear and as logical as the concept permits.